Despite considerable progress in recent years on automatic abdominal multi-organ segmentation from CT/MRI scans, a comprehensive evaluation of model capabilities has been hampered by the lack of a large-scale benchmark covering diverse clinical scenarios. Constrained by the high cost of collecting and labeling 3D medical data, most deep learning models to date have been driven by datasets with a limited number of organs of interest or samples, which still limits the power of modern deep models and prevents a comprehensive and fair assessment of various methods. To mitigate these limitations, we present AMOS, a large-scale, diverse clinical dataset for abdominal organ segmentation. AMOS provides 500 CT and 100 MRI scans collected from multi-center, multi-vendor, multi-modality, multi-phase, multi-disease patients, each with voxel-level annotations of 15 abdominal organs, providing challenging examples and a test bed for studying robust segmentation algorithms under diverse targets and scenarios. We further benchmark several state-of-the-art medical segmentation models to evaluate the status of existing methods on this new, challenging dataset. We have made our dataset, benchmark server, and baselines publicly available, and hope to inspire future research. Information can be found at https://amos22.grand-challenge.org.
This paper proposes a novel approach for small-sized humanoid robots to self-calibrate their foot force sensors. The method consists of two steps: 1. commanding the robot to move along planned whole-body trajectories in different double-support configurations; 2. estimating the sensor parameters by minimizing the error between the measured and modeled center of pressure (CoP) and ground reaction force (GRF) during the robot's motion. This is the first autonomous calibration method proposed for foot force sensing devices in small-sized humanoid robots. In addition, we introduce a high-accuracy manual calibration method to establish a CoP ground truth, which is used to validate the CoP measured after self-calibration. The results show that self-calibration can accurately estimate the CoP and GRF without any manual intervention. We demonstrate our method using a NAO humanoid platform and previously presented force-sensing shoes.
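To make the optimization in step 2 concrete, the following is a minimal sketch of the parameter fit, assuming a linear gain/offset model per force sensor and four sensors at known positions on each sole; the sensor layout, model form, and weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of the sensor-parameter fit (step 2), not the authors' code.
# Assumptions: each foot has 4 force sensors at known 2D positions on the
# sole; sensor i reports raw value r_i, modeled as f_i = g_i*r_i + b_i with
# unknown gain g_i and offset b_i. The reference CoP/GRF come from the
# robot's whole-body model during the planned trajectories.
import numpy as np
from scipy.optimize import least_squares

SENSOR_POS = np.array([[0.07, 0.03], [0.07, -0.03],
                       [-0.03, 0.03], [-0.03, -0.03]])  # meters, hypothetical

def measured_cop_grf(raw, params):
    """Map raw readings (4,) to (CoP, GRF) under gain/offset params (8,)."""
    g, b = params[:4], params[4:]
    f = g * raw + b                                   # calibrated forces
    grf = f.sum()
    cop = (SENSOR_POS * f[:, None]).sum(axis=0) / grf
    return cop, grf

def residuals(params, raws, cop_ref, grf_ref, w=1.0):
    """Stacked CoP (m) and GRF (N) errors over all trajectory samples;
    w is a hypothetical weight balancing the two units."""
    res = []
    for raw, c_ref, f_ref in zip(raws, cop_ref, grf_ref):
        cop, grf = measured_cop_grf(raw, params)
        res.extend(cop - c_ref)
        res.append(w * (grf - f_ref))
    return np.asarray(res)

# raws: (T,4) raw readings; cop_ref: (T,2); grf_ref: (T,) from the model.
# x0 = np.concatenate([np.ones(4), np.zeros(4)])  # unit gains, zero offsets
# sol = least_squares(residuals, x0, args=(raws, cop_ref, grf_ref))
```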
Speech-based input has been gaining popularity in our daily lives with smartphones and tablets, since voice is the easiest and most efficient way for human-computer interaction. This paper aims to design more effective speech-based interfaces to query structured data in relational databases. We first identify a new task named Speech-to-SQL, which aims to understand the information conveyed by human speech and directly translate it into structured query language (SQL) statements. A naive solution to this problem works in a cascaded manner, i.e., an automatic speech recognition (ASR) component followed by a text-to-SQL component. However, it requires a high-quality ASR system and also suffers from the error-compounding problem between the two components, resulting in limited performance. To handle these challenges, we further propose a novel end-to-end neural architecture named SpeechSQLNet, which directly translates human speech into SQL queries without an external ASR step. SpeechSQLNet has the advantage of making full use of the rich linguistic information presented in speech. To the best of our knowledge, this is the first attempt to directly synthesize SQL from arbitrary natural-language questions, rather than from a natural-language-based version of SQL or its variants with a limited SQL grammar. To validate the effectiveness of the proposed problem and model, we further construct a dataset named SpeechQL by piggybacking on widely used text-to-SQL datasets. Extensive experimental evaluations on this dataset show that SpeechSQLNet can directly synthesize high-quality SQL queries from human speech, outperforming various competitive counterparts, including the cascaded methods, in terms of exact-match accuracy.
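For contrast with the end-to-end design, a sketch of the cascaded baseline the paper argues against is shown below; `asr_model` and `text2sql_model` are hypothetical placeholders, not components from the paper.

```python
# Sketch of the cascaded baseline described above (ASR followed by
# text-to-SQL), the pipeline that SpeechSQLNet replaces end-to-end.
def cascaded_speech_to_sql(audio, asr_model, text2sql_model, db_schema):
    text = asr_model.transcribe(audio)            # step 1: speech -> text
    # Any ASR error here (e.g., "price" -> "prize") propagates into the
    # parser: this is the error-compounding problem the paper points out.
    return text2sql_model.parse(text, db_schema)  # step 2: text -> SQL
```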
Artificial intelligence (AI) technology is increasingly used in digital orthodontics, and one of the challenges is to automatically and accurately detect tooth landmarks and axes. This is partly because of their complex geometric definitions, and partly due to the large variations among individual teeth and across different types of teeth. We therefore propose a deep learning approach for tooth landmark/axis detection on tooth models, trained on a dataset labeled by professional dentists, which is essential for orthodontic treatment. Our method can extract not only tooth landmarks in the form of points (e.g., cusps), but also axes that measure tooth angulation and inclination. The proposed network takes a 3D tooth model as input and predicts various types of tooth landmarks and axes. Specifically, we encode landmarks and axes as dense fields defined on the surface of the tooth model. This design choice, together with a set of added components, makes the proposed network more suitable for extracting sparse landmarks from a given 3D tooth model. The proposed method is extensively evaluated on a set of dental models prepared by experienced dentists. Results show that our method can produce tooth landmarks with high accuracy. We examine and justify our method through comparisons with state-of-the-art methods as well as ablation studies.
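As a rough illustration of the dense-field encoding, one can represent a point landmark as a per-vertex heatmap on the tooth mesh and recover it by argmax; the Gaussian encoding and its bandwidth below are assumptions, and the paper additionally encodes axes, which this sketch omits.

```python
# Minimal sketch of the dense-field idea: encode a point landmark as a
# per-vertex field on the mesh, decode the sparse landmark as its peak.
import numpy as np

def encode_landmark(vertices, landmark, sigma=0.5):
    """vertices: (N,3) mesh vertex positions; landmark: (3,) target point."""
    d = np.linalg.norm(vertices - landmark, axis=1)
    return np.exp(-d**2 / (2 * sigma**2))     # dense field, peaks at landmark

def decode_landmark(vertices, field):
    """Extract the sparse landmark as the vertex where the field peaks."""
    return vertices[np.argmax(field)]
```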
Masked image modeling (MIM) has shown great promise for self-supervised learning (SSL), yet it has been criticized for learning inefficiency. We believe insufficient utilization of training signals is responsible. To alleviate this issue, we introduce a conceptually simple yet learning-efficient MIM training scheme, termed Disjoint Masking with Joint Distillation (DMJD). For disjoint masking (DM), we sequentially sample multiple masked views per image in a mini-batch under a disjoint regulation to raise the usage of tokens for reconstruction in each image while keeping the masking rate of each view. For joint distillation (JD), we adopt a dual-branch architecture to predict invisible (masked) and visible (unmasked) tokens, respectively, with superior learning targets. Rooted in orthogonal perspectives on improving training efficiency, DM and JD cooperatively accelerate training convergence without sacrificing the model's generalization ability. Concretely, DM can train ViT with half of the effective training epochs (3.7 times less time-consuming) while reporting competitive performance. With JD, our DMJD clearly improves linear probing classification accuracy over ConvMAE by 5.8%. On fine-grained downstream tasks such as semantic segmentation and object detection, our DMJD also presents superior generalization compared with state-of-the-art SSL methods. The code and models will be made public at https://github.com/mx-mark/DMJD.
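As an illustration of disjoint masking, a minimal sketch follows in which the visible token sets of the sampled views are kept non-overlapping, so each view preserves the masking rate while the union of reconstructed tokens grows; the token count, ratio, and number of views are illustrative, not the paper's settings.

```python
# Hedged sketch of disjoint masking (DM): sample several masked views of one
# image whose visible tokens do not overlap. Each view keeps the usual
# masking ratio, while the union of masked (reconstructed) tokens grows,
# raising token usage per image. Consult the paper for the exact regulation.
import torch

def disjoint_masked_views(num_tokens=196, mask_ratio=0.75, num_views=2):
    n_visible = num_tokens - int(num_tokens * mask_ratio)
    assert num_views * n_visible <= num_tokens, "views cannot stay disjoint"
    perm = torch.randperm(num_tokens)          # one shuffle, then partition
    views = []
    for v in range(num_views):
        visible = perm[v * n_visible:(v + 1) * n_visible]
        mask = torch.ones(num_tokens, dtype=torch.bool)  # True = masked
        mask[visible] = False
        views.append(mask)
    return views  # list of boolean masks with disjoint visible sets
```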
Cohn and Umans proposed a framework for developing fast matrix multiplication algorithms based on embedding computation in certain group algebras. In subsequent work with Kleinberg and Szegedy, they connected this to the search for combinatorial objects called strong uniquely solvable puzzles (strong USPs). We begin a systematic computer-aided search for these objects. We develop and implement constraint-based algorithms built on reductions to $\mathrm{SAT}$ and $\mathrm{IP}$ to verify that puzzles are strong USPs, and to search for large strong USPs. We produce tight bounds on the maximum size of a strong USP for width $k \le 5$, construct puzzles of small width that are larger than those of previous work, and improve the upper bounds on strong USP size for $k \le 12$. Although our work only deals with puzzles of small constant width, the strong USPs we find imply matrix multiplication algorithms that run in $O(n^\omega)$ time with exponent $\omega \le 2.66$. While our algorithms do not beat the fastest known algorithms, our work provides evidence of, and perhaps a path to, finding families of strong USPs that imply matrix multiplication algorithms more efficient than those currently known.
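For intuition about why SAT/IP reductions are needed, here is a brute-force verifier, assuming the standard Cohn-Kleinberg-Szegedy-Umans definition of a strong USP; its $(s!)^3$ enumeration over row-permutation triples is only feasible for tiny puzzles.

```python
# Hedged sketch of a strong-USP check, assuming the definition: a puzzle U
# (rows over {1,2,3}) is a strong USP if for every triple of row permutations
# (p1, p2, p3), not all equal, some row r and column i have *exactly two* of
# p1(r)[i]==1, p2(r)[i]==2, p3(r)[i]==3. This is background intuition, not
# the paper's SAT/IP-based verifier.
from itertools import permutations

def is_strong_usp(puzzle):
    rows = list(range(len(puzzle)))
    k = len(puzzle[0])
    for p1 in permutations(rows):
        for p2 in permutations(rows):
            for p3 in permutations(rows):
                if p1 == p2 == p3:
                    continue
                witnessed = any(
                    (puzzle[p1[r]][i] == 1) + (puzzle[p2[r]][i] == 2)
                    + (puzzle[p3[r]][i] == 3) == 2
                    for r in rows for i in range(k))
                if not witnessed:
                    return False
    return True

# Purely illustrative input, a width-2 puzzle with two rows:
# print(is_strong_usp([(1, 2), (3, 3)]))
```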
This paper presents a practical global optimization algorithm for the K-center clustering problem, which selects K samples as cluster centers so as to minimize the maximum within-cluster distance. The algorithm is based on a reduced-space branch-and-bound scheme and guarantees convergence to the global optimum in a finite number of steps by branching only on the region of centers. To improve efficiency, we design a two-stage decomposable lower bound whose solution can be derived in closed form. In addition, we propose several acceleration techniques to narrow down the region of centers, including bound tightening, sample reduction, and parallelization. Extensive studies on synthetic and real-world datasets demonstrate that our algorithm can solve K-center problems to global optimality within 4 hours for ten million samples in serial mode and one billion samples in parallel mode. Moreover, compared with state-of-the-art heuristic methods, the global optimum obtained by our algorithm reduces the objective function by 25.8% on average across all synthetic and real-world datasets.
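For background, the sketch below evaluates the K-center objective and runs the classic farthest-point greedy heuristic (a well-known 2-approximation that can serve as an incumbent upper bound); it is not the paper's branch-and-bound algorithm or its decomposable lower bound.

```python
# K-center objective plus the standard farthest-point greedy heuristic,
# a hedged illustration of the problem the paper solves to global optimality.
import numpy as np

def kcenter_objective(X, centers):
    """Maximum distance from any sample to its nearest center."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return d.min(axis=1).max()

def greedy_kcenter(X, K, seed=0):
    """Pick a random first center, then repeatedly add the farthest point."""
    rng = np.random.default_rng(seed)
    idx = [rng.integers(len(X))]
    d = np.linalg.norm(X - X[idx[0]], axis=1)
    for _ in range(K - 1):
        nxt = int(d.argmax())                    # farthest from chosen centers
        idx.append(nxt)
        d = np.minimum(d, np.linalg.norm(X - X[nxt], axis=1))
    return X[idx]
```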
Video-language pre-training has advanced the performance of various downstream video-language tasks. However, most previous methods directly inherit or adapt typical image-language pre-training paradigms to video-language pre-training, thus not fully exploiting the unique characteristic of video, i.e., its temporal nature. In this paper, we propose HiTeA, a Hierarchical Temporal-Aware video-language pre-training framework with two novel pre-training tasks for modeling cross-modal alignment between moments and texts as well as the temporal relations of video-text pairs. Specifically, we propose a cross-modal moment exploration task to explore moments in videos, which results in detailed video moment representations. Besides, the inherent temporal relations are captured by aligning video-text pairs as a whole at different time resolutions with a multi-modal temporal relation exploration task. Furthermore, we introduce a shuffling test to evaluate the temporal reliance of datasets and video-language pre-training models. We achieve state-of-the-art results on 15 well-established video-language understanding and generation tasks, especially on temporal-oriented datasets (e.g., SSv2-Template and SSv2-Label), with 8.6% and 11.1% improvements respectively. HiTeA also demonstrates strong generalization ability when directly transferred to downstream tasks in a zero-shot manner. Models and demos will be available on ModelScope.
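A minimal sketch of a shuffling test of this kind appears below: evaluate a model on temporally permuted videos and compare against the original score; `evaluate` and the metric are hypothetical placeholders, not the paper's exact protocol.

```python
# Hedged sketch of a shuffling test for temporal reliance: a small gap between
# the two scores suggests the task/model barely relies on temporal order.
import torch

def shuffling_test(model, videos, evaluate):
    """videos: tensor of shape (B, T, C, H, W); returns (orig, shuffled)."""
    orig = evaluate(model, videos)
    perm = torch.randperm(videos.shape[1])       # permute the time axis
    shuffled = evaluate(model, videos[:, perm])
    return orig, shuffled  # temporal reliance ~ orig - shuffled
```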
Despite excellent performance in image generation, Generative Adversarial Networks (GANs) are notorious for their demands of enormous storage and intensive computation. As a powerful "performance maker", knowledge distillation has been demonstrated to be particularly efficacious in exploring low-cost GANs. In this paper, we investigate the irreplaceability of the teacher discriminator and present an inventive discriminator-cooperated distillation, abbreviated as DCD, for refining better feature maps from the generator. In contrast to conventional pixel-to-pixel matching methods in feature-map distillation, our DCD utilizes the teacher discriminator as a transformation to drive the intermediate results of the student generator to be perceptually close to the corresponding outputs of the teacher generator. Furthermore, to mitigate mode collapse in GAN compression, we construct a collaborative adversarial training paradigm in which a teacher discriminator is established from scratch to co-train with the student generator alongside our DCD. Our DCD shows superior results compared with existing GAN compression methods. For instance, after reducing the MACs of CycleGAN by over 40x and its parameters by over 80x, we decrease the FID metric from 61.53 to 48.24, while the current SoTA method merely reaches 51.92. This work's source code is available at https://github.com/poopit/DCD-official.
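The core idea can be sketched as a distillation loss computed in the teacher discriminator's feature space rather than pixel-to-pixel on generator feature maps; the layer selection and plain L1 distance below are assumptions, not the paper's exact loss.

```python
# Hedged sketch of discriminator-cooperated distillation: feed both
# generators' outputs through the teacher discriminator and match its
# intermediate features, pulling the student perceptually toward the teacher.
import torch
import torch.nn.functional as F

def dcd_loss(student_gen, teacher_gen, teacher_disc_feats, x):
    """teacher_disc_feats(img) -> list of intermediate discriminator features."""
    with torch.no_grad():
        t_feats = teacher_disc_feats(teacher_gen(x))   # fixed targets
    s_feats = teacher_disc_feats(student_gen(x))       # grads reach the student
    return sum(F.l1_loss(s, t) for s, t in zip(s_feats, t_feats))
```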
The task of referring video object segmentation (RVOS) aims to segment, in the frames of a given video, the object to which the referring expression refers. Previous methods adopt multi-stage approaches and design complex pipelines to obtain promising results. Recently, end-to-end methods based on Transformers have proved their superiority. In this work, we draw on the advantages of the above methods to provide a simple and effective pipeline for RVOS. First, we improve the state-of-the-art one-stage method ReferFormer to obtain mask sequences that are strongly correlated with the language descriptions. Second, based on a reliable and high-quality keyframe, we leverage the superior performance of a video object segmentation model to further enhance the quality and temporal consistency of the mask results. Our single model reaches 70.3 J&F on the Referring Youtube-VOS validation set and 63.0 on the test set. After ensembling, we achieve 64.1 on the final leaderboard, ranking 1st in the CVPR 2022 Referring Youtube-VOS challenge. Code will be available at https://github.com/Zhiweihhh/cvpr2022-rvos-challenge.git.
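A schematic of the two-stage pipeline is sketched below, with `referformer` and `vos_model` as placeholders for the actual components and a simple highest-confidence rule standing in for the paper's keyframe selection.

```python
# Hedged sketch of the refer-then-propagate pipeline described above.
def refer_then_propagate(frames, expression, referformer, vos_model):
    masks, scores = referformer(frames, expression)  # per-frame masks + confidence
    key = max(range(len(frames)), key=lambda t: scores[t])  # most reliable frame
    # Propagate the keyframe mask with a video object segmentation model to
    # improve quality and temporal consistency of the final mask sequence.
    return vos_model.propagate(frames, key, masks[key])
```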